UMCS tree based hybrid similarity measure of UML class diagram
Zhongchen YUAN, Zongmin MA
Journal of Computer Applications    2024, 44 (3): 883-889.   DOI: 10.11772/j.issn.1001-9081.2022111702

Software reuse retrieves previously developed software artifacts from a repository according to given conditions, and the retrieval is based on a similarity measure. The UML (Unified Modeling Language) class diagram is widely applied in software design, and its reuse, as the core of software design reuse, has attracted much attention; this motivates research on the similarity of UML class diagrams. A UML class diagram contains both semantic and structural content. Existing similarity research on UML class diagrams focuses mainly on semantics, with some discussion of structural similarity, but the combination of semantics and structure has not been considered. Therefore, a hybrid similarity measure combining semantics and structure was proposed. Because the UML class diagram is not a formal notation, it was first transformed into a graph model for similarity measurement; the Maximum Common Subgraph List (MCSL) was then searched, a Maximum Common Subgraph (MCS) tree was built from the MCSL, and a hybrid similarity measure based on the MCS sequence was proposed, with semantic matching and structural matching defined on concept and structure common subgraphs, respectively. Similarity comparison and similarity-based classification quality experiments were carried out, and the results validate the advantages of the proposed method.
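The idea of scoring a diagram pair on both semantic and structural overlap can be sketched as below. This is a minimal illustration with Jaccard stand-ins and an assumed weighting parameter `alpha`, not the paper's MCS-tree computation; the diagram data and function names are invented.

```python
def semantic_similarity(classes_a, classes_b):
    """Jaccard overlap of class-name sets (stand-in for semantic matching)."""
    a, b = set(classes_a), set(classes_b)
    return len(a & b) / len(a | b) if a | b else 1.0

def structural_similarity(edges_a, edges_b):
    """Jaccard overlap of association edges (stand-in for common-subgraph matching)."""
    a, b = set(edges_a), set(edges_b)
    return len(a & b) / len(a | b) if a | b else 1.0

def hybrid_similarity(diagram_a, diagram_b, alpha=0.5):
    """Weighted combination of the semantic and structural components."""
    sem = semantic_similarity(diagram_a["classes"], diagram_b["classes"])
    struct = structural_similarity(diagram_a["edges"], diagram_b["edges"])
    return alpha * sem + (1 - alpha) * struct

# Two toy "class diagrams": node sets plus association edges.
d1 = {"classes": {"Order", "Customer", "Item"},
      "edges": {("Order", "Customer"), ("Order", "Item")}}
d2 = {"classes": {"Order", "Customer", "Invoice"},
      "edges": {("Order", "Customer"), ("Order", "Invoice")}}
print(round(hybrid_similarity(d1, d2), 3))  # 0.417
```

Retrieval for reuse would then rank repository diagrams by this score against the query diagram.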

Reverse hybrid access control scheme based on object attribute matching in cloud computing environment
GE Lina, HU Yugu, ZHANG Guifen, CHEN Yuanyuan
Journal of Computer Applications    2021, 41 (6): 1604-1610.   DOI: 10.11772/j.issn.1001-9081.2020121954
Cloud computing improves the efficiency of using, analyzing and managing big data, but it also raises data-security and privacy concerns for the data contributors who share files through cloud services. To address this problem, a reverse hybrid access control method based on object attribute matching in the cloud computing environment was proposed, combining role-based and attribute-based access control within the next-generation access control architecture. Firstly, the access level of a shared file was set by the data contributor, who reversely specified the minimum weight required of an accessing object. Then, the weight of each attribute was computed directly with the variation-coefficient weighting method, eliminating the policy-rule matching step of attribute-centered role-based access control. Finally, the weight value the contributor assigned to the data file served as the threshold a data visitor must reach to be granted access, which both enforced data access control and protected private data. Experimental results show that, as the number of visits increases, the method's criteria for judging malicious behaviors and insufficient-right behaviors stabilize, its detection ability strengthens, and its success rate converges to a stable level. Compared with traditional access control methods, the proposed method achieves higher decision-making efficiency under large numbers of user visits, which verifies its effectiveness and feasibility.
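The variation-coefficient weighting and threshold check can be sketched as follows. The attribute names, sample values and threshold are invented for illustration; the paper's exact attribute model may differ.

```python
import statistics

def cv_weights(attribute_samples):
    """Weight each attribute by its coefficient of variation (std/mean),
    normalized so the weights sum to 1.
    attribute_samples: dict mapping attribute -> list of observed values."""
    cv = {a: statistics.pstdev(v) / statistics.mean(v)
          for a, v in attribute_samples.items()}
    total = sum(cv.values())
    return {a: c / total for a, c in cv.items()}

def access_allowed(visitor_attrs, weights, threshold):
    """Grant access if the visitor's weighted attribute score reaches the
    contributor-set threshold (the object side decides, hence 'reverse')."""
    score = sum(weights[a] * visitor_attrs.get(a, 0) for a in weights)
    return score >= threshold

# Hypothetical attribute observations used to derive the weights.
samples = {"role_level": [1, 2, 3, 4], "trust": [0.9, 0.8, 0.85, 0.95]}
w = cv_weights(samples)
print(access_allowed({"role_level": 3, "trust": 0.9}, w, threshold=2.0))
```

Because the weights come straight from the data's dispersion, no per-request policy-rule matching step is needed.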
Drug-target association prediction algorithm based on graph convolutional network
XU Guobao, CHEN Yuanxiao, WANG Ji
Journal of Computer Applications    2021, 41 (5): 1522-1526.   DOI: 10.11772/j.issn.1001-9081.2020081186
Traditional drug-target association prediction based on biological experiments cannot meet the demands of pharmaceutical research because of its low efficiency and high cost. To solve this problem, a novel Graph Convolution for Drug-Target Interactions (GCDTI) algorithm was proposed. In GCDTI, graph convolution and auto-encoder techniques were combined through semi-supervised learning to construct an encoding layer that integrates node features and a decoding layer that predicts the full interaction network. At the same time, graph convolution was used to build a latent factor model that effectively exploits the high-dimensional attribute information of drugs and targets for end-to-end learning. In this method, the input feature information can be combined with the known interaction network without preprocessing, showing that the graph convolution layer of the model effectively fuses the input data and node features. Compared with other state-of-the-art methods, GCDTI achieves the highest prediction accuracy and average Area Under the Receiver Operating Characteristic (ROC) Curve (AUC) (0.924 6±0.004 8), with strong robustness. Experimental results show that GCDTI, with its end-to-end architecture, has the potential to be a reliable prediction method when large amounts of drug and target data need to be predicted.
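A single graph-convolution propagation step of the kind the encoder builds on can be sketched as below. This is a generic Kipf-and-Welling-style layer on a toy bipartite drug-target graph, not GCDTI's exact architecture; the adjacency matrix, feature sizes and random weights are illustrative.

```python
import numpy as np

def gcn_layer(adj, features, weight):
    """One graph-convolution step: H' = ReLU(D^-1/2 (A+I) D^-1/2 H W)."""
    a_hat = adj + np.eye(adj.shape[0])                # add self-loops
    d_inv_sqrt = np.diag(1.0 / np.sqrt(a_hat.sum(axis=1)))
    norm = d_inv_sqrt @ a_hat @ d_inv_sqrt            # symmetric normalization
    return np.maximum(norm @ features @ weight, 0)    # linear map + ReLU

# Toy graph: 2 drugs (nodes 0-1) and 2 targets (nodes 2-3), bipartite links.
adj = np.array([[0, 0, 1, 1],
                [0, 0, 1, 0],
                [1, 1, 0, 0],
                [1, 0, 0, 0]], dtype=float)
rng = np.random.default_rng(0)
h = rng.standard_normal((4, 5))   # node attribute features
w = rng.standard_normal((5, 3))   # layer weights (random stand-in, untrained)
out = gcn_layer(adj, h, w)
print(out.shape)  # (4, 3)
```

In the full model, stacked layers of this form produce node embeddings, and a decoder scores drug-target pairs from them.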
Supernetwork link prediction method based on spatio-temporal relation in location-based social network
HU Min, CHEN Yuanhui, HUANG Hongcheng
Journal of Computer Applications    2018, 38 (6): 1682-1690.   DOI: 10.11772/j.issn.1001-9081.2017122904
The accuracy of link prediction in existing methods for Location-Based Social Networks (LBSNs) is low because social, location and time factors are not integrated effectively. To solve this problem, a supernetwork link prediction method based on spatio-temporal relations in LBSN was proposed. Firstly, to address the heterogeneity of the network and the spatio-temporal relations among users, the network was divided into a four-layer "spatio-temporal-user-location-category" supernetwork to reduce the coupling between the influencing factors. Secondly, considering the impact of edge weights on the network, the edge weights of the subnets were defined and quantified by mining user influence, implicit association relationships, user preferences and node degree information, and a four-layer weighted supernetwork model was built. Finally, on the basis of the weighted supernetwork model, the super-edge and weighted super-edge structures were defined to mine the multivariate relationships among users for prediction. The experimental results show that, compared with link prediction methods based on homogeneity and heterogeneity, the proposed method improves precision, recall, F1-measure (F1) and Area Under the receiver operating characteristic Curve (AUC), with its AUC 4.69% higher than that of the heterogeneity-based method.
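The principle of combining per-layer evidence with layer weights can be sketched as below. The layers, weights and common-neighbour scoring here are illustrative stand-ins, not the paper's quantification of user influence or super-edge structure.

```python
def layer_score(neighbors, u, v):
    """Common-neighbour count between u and v within one layer."""
    return len(neighbors.get(u, set()) & neighbors.get(v, set()))

def supernet_score(layers, layer_weights, u, v):
    """Weighted sum of per-layer link scores across the supernetwork."""
    return sum(w * layer_score(nb, u, v)
               for nb, w in zip(layers, layer_weights))

# Hypothetical layers: social ties, visited locations, active time slots.
social = {"a": {"c", "d"}, "b": {"c", "d"}}
location = {"a": {"cafe"}, "b": {"cafe"}}
time_slot = {"a": {"evening"}, "b": {"morning"}}
score = supernet_score([social, location, time_slot], [0.5, 0.3, 0.2], "a", "b")
print(score)  # 1.3
```

Candidate user pairs would be ranked by this score, and the top-ranked pairs predicted as future links.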
Real-time crowd counting method from video stream based on GPU
JI Lina, CHEN Qingkui, CHEN Yuanjing, ZHAO Deyu, FANG Yuling, ZHAO Yongtao
Journal of Computer Applications    2017, 37 (1): 145-152.   DOI: 10.11772/j.issn.1001-9081.2017.01.0145
Focusing on the low counting accuracy caused by serious occlusions and abrupt illumination variations, a real-time video crowd counting method based on the Gaussian Mixture Model (GMM) and Scale-Invariant Feature Transform (SIFT) features was proposed. Firstly, the moving crowd was detected with GMM-based motion segmentation, and the Gray-Level Co-occurrence Matrix (GLCM) and morphological operations were applied to remove small moving background objects and dense noise in the non-crowd foreground; considering the high time complexity of the GMM algorithm, a more efficient parallel model was proposed. Secondly, SIFT feature points served as the basis of the crowd statistics, and the execution time was reduced by extracting features from binary images. Finally, a statistical analysis method based on crowd features and crowd counts was proposed: data sets with different crowd levels were chosen for training to obtain the average feature count of a single person, and pedestrians at different densities were counted in the experiments. The algorithm was accelerated with multiple stream processors on a Graphics Processing Unit (GPU), and the efficient scheduling of tasks on Compute Unified Device Architecture (CUDA) streams in practical applications was analyzed. The experimental results indicate that the speed is increased by 31.5% compared with a single stream and by 71.8% compared with the CPU.
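The counting step, calibrating an average feature count per person and dividing a frame's foreground feature count by it, can be sketched as below. The calibration numbers are made up; the real pipeline derives the feature counts from SIFT on GMM foreground masks.

```python
def calibrate(feature_counts, person_counts):
    """Average feature count per person over labelled training frames."""
    return sum(feature_counts) / sum(person_counts)

def estimate_crowd(n_features, features_per_person):
    """Estimate the number of people from a frame's feature count."""
    return round(n_features / features_per_person)

# Hypothetical training frames: (SIFT features found, people present).
fpp = calibrate([120, 250, 480], [5, 10, 20])   # ~24.3 features/person
print(estimate_crowd(365, fpp))  # 15
```

Training on frames spanning several density levels keeps the per-person average from being biased toward sparse or dense scenes.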
Random service system model based on UPnP service discovery
HU Zhikun, SONG Jingye, CHEN Yuan
Journal of Computer Applications    2016, 36 (3): 591-595.   DOI: 10.11772/j.issn.1001-9081.2016.03.591
In the automatic discovery of smart-home network devices, serious congestion occurs because devices choose the delay time for sending service response messages randomly and independently. To solve this problem, taking the Universal Plug and Play (UPnP) service discovery protocol as an example and considering the differing demands of reliability and real-time performance, a random service system model based on UPnP service discovery was proposed. A profit-loss function including a system response index and a waiting index was designed, and the relation between the optimal buffer queue length and the profit-loss coefficient was derived. Comparisons of arrival time, departure time, waiting time and travel time under different buffer queue lengths verify the necessity of the profit-loss function and the feasibility of the proposed model.
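The shape of such an optimization, trading a waiting cost against a loss cost to pick a buffer length, can be sketched with a standard M/M/1/K queue. The M/M/1/K form and the cost coefficients are assumptions for illustration; the paper's response and waiting indices may be defined differently.

```python
def mm1k_cost(rho, k, c_wait, c_loss):
    """Profit-loss value for an M/M/1/K queue at utilization rho (rho != 1):
    waiting cost on the mean queue length plus loss cost on the
    blocking probability."""
    p0 = (1 - rho) / (1 - rho ** (k + 1))
    p_block = p0 * rho ** k                            # arrival finds queue full
    mean_len = (rho / (1 - rho)
                - (k + 1) * rho ** (k + 1) / (1 - rho ** (k + 1)))
    return c_wait * mean_len + c_loss * p_block

def best_buffer(rho, c_wait, c_loss, k_max=50):
    """Buffer queue length minimizing the profit-loss function."""
    return min(range(1, k_max + 1),
               key=lambda k: mm1k_cost(rho, k, c_wait, c_loss))

print(best_buffer(rho=0.8, c_wait=1.0, c_loss=10.0))  # 5
```

A longer buffer drops fewer response messages but makes survivors wait longer; the profit-loss minimum balances the two.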
Kinect depth image filtering algorithm based on joint bilateral filter
LI Zhifei, CHEN Yuan
Journal of Computer Applications    2014, 34 (8): 2231-2234.   DOI: 10.11772/j.issn.1001-9081.2014.08.2231

The depth image obtained by a Kinect camera usually contains noise and black holes, so applying it directly in human motion tracking and recognition systems gives poor results. To solve this problem, an efficient depth image filtering algorithm based on the joint bilateral filter was proposed. Following the principle of joint bilateral filtering, the depth and color images captured simultaneously by the Kinect camera were taken as input. The spatial-distance weight of the depth image and the grayscale weight of the RGB color image were computed with Gaussian kernel functions, and the two weights were multiplied to obtain the weight of the joint bilateral filter. The filter was then built by replacing the Gaussian kernel function with the fast Gauss transform, and the filtered result was convolved with the noisy image to filter the Kinect depth image. The experimental results show that the proposed algorithm significantly improves the noise robustness of the human motion tracking and recognition system and raises the recognition rate by 17.3%; its average running time is 371 ms, much lower than that of other similar algorithms. The proposed algorithm keeps the advantages of the joint bilateral filter and, because the color image is introduced, can repair the black holes well while reducing the noise. It outperforms the traditional bilateral filter and joint bilateral filter in denoising and hole repair for Kinect depth images, with higher real-time performance.
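The core joint-bilateral idea, spatial weights from pixel distance but range weights from the guidance (color) image, applied to the depth image, can be sketched as below. This is a plain-loop version without the fast Gauss transform speedup; the image sizes and sigma values are illustrative.

```python
import numpy as np

def joint_bilateral(depth, guide, radius=2, sigma_s=2.0, sigma_r=10.0):
    """Joint bilateral filter: smooth `depth` using range weights taken
    from the guidance image `guide` (e.g. grayscale of the RGB frame)."""
    h, w = depth.shape
    out = np.zeros((h, w))
    ys, xs = np.mgrid[-radius:radius + 1, -radius:radius + 1]
    spatial = np.exp(-(xs**2 + ys**2) / (2 * sigma_s**2))   # distance kernel
    pad_d = np.pad(depth.astype(float), radius, mode="edge")
    pad_g = np.pad(guide.astype(float), radius, mode="edge")
    for i in range(h):
        for j in range(w):
            win_d = pad_d[i:i + 2 * radius + 1, j:j + 2 * radius + 1]
            win_g = pad_g[i:i + 2 * radius + 1, j:j + 2 * radius + 1]
            rng_w = np.exp(-(win_g - guide[i, j])**2 / (2 * sigma_r**2))
            wgt = spatial * rng_w          # product of the two weights
            out[i, j] = (wgt * win_d).sum() / wgt.sum()
    return out

depth = np.full((8, 8), 100.0)
depth[4, 4] = 0.0                 # a hole/noise pixel in the depth map
guide = np.full((8, 8), 128.0)    # flat color region around the hole
filtered = joint_bilateral(depth, guide)
print(filtered[4, 4] > 50)        # hole pulled toward neighbourhood depth
```

Because the range weights come from the color image, a depth hole surrounded by uniform color is filled from its neighbours, while depth edges coinciding with color edges are preserved.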

Improved time synchronization algorithm for time division long term evolution system
TIAN Zengshan, BO Chen, YUAN Zheng-Wu
Journal of Computer Applications    2014, 34 (7): 1974-1977.   DOI: 10.11772/j.issn.1001-9081.2014.07.1974

To deal with the high computational complexity and poor anti-Carrier-Frequency-Offset (anti-CFO) performance of conventional time synchronization algorithms for the Time Division Long Term Evolution (TD-LTE) system, an improved algorithm exploiting the time-domain conjugate symmetry of the Secondary Synchronization Signal (SSS) was proposed. In this algorithm, the SSS location was estimated as the peak of the cross-correlation between the received signal and its time reversal, and by combining the SSS location with the detection of the cell group ID, the Cyclic Prefix (CP) type could also be determined. Analysis and simulation results demonstrate that, compared with conventional methods, the improved algorithm has low computational complexity, good anti-CFO performance and better reliability, and it also performs well in multipath channels. Its application in a third-party TD-LTE UE detection system proves the algorithm effective and feasible.
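The detection principle, locating a conjugate-symmetric block by correlating samples mirrored around each candidate centre (equivalent to cross-correlating the signal with its time reversal), can be sketched as below. The symmetric block is a toy construction, not an actual LTE SSS sequence, and the signal lengths are illustrative.

```python
import numpy as np

def detect_symmetric_center(r, half_len):
    """Find the centre of a conjugate-symmetric block in r: for a signal
    with s[c+k] = conj(s[c-k]), the products r[d+k]*r[d-k] add coherently
    (each equals |s|^2) only at d = c."""
    n = len(r)
    metric = np.zeros(n)
    k = np.arange(1, half_len + 1)
    for d in range(half_len, n - half_len):
        metric[d] = np.abs(np.sum(r[d + k] * r[d - k]))
    return int(np.argmax(metric))

rng = np.random.default_rng(1)
half = np.exp(2j * np.pi * rng.random(31))                 # unit-modulus half
sss = np.concatenate([np.conj(half[::-1]), [1.0], half])   # symmetric, 63 taps
signal = 0.1 * (rng.standard_normal(200) + 1j * rng.standard_normal(200))
signal[60:60 + 63] += sss           # symmetric block centred at index 91
print(detect_symmetric_center(signal, half_len=31))  # 91
```

Note that the products are not conjugated: the conjugation is already built into the symmetry of the block, which is also what makes the metric robust to a common phase rotation from CFO.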

Network mobility and fast handover scheme within PMIPv6
KONG Fanjie, ZHANG Qizhi, RAO Liang, CHEN Yuan
Journal of Computer Applications    2013, 33 (06): 1495-1504.   DOI: 10.3724/SP.J.1087.2013.01495
To reduce the long handover latency of mobile networks under NEtwork MObility (NEMO) Basic Support (NBS), a scheme for NEMO within PMIPv6 was proposed, together with an improved handover procedure. The proposed scheme achieves fast handover of the mobile network by decreasing the number of handover messages transmitted on the wireless link and by setting up the packet-forwarding tunnel in advance. Analytical results show that, compared with the NBS handover procedure, the standard and fast handover procedures of the proposed scheme decrease handover latency by 56.55% and 58.63%, respectively.
New color image segmentation algorithm based on level set
CHEN Yuan-tao, XU Wei-hong, WU Jia-ying
Journal of Computer Applications    2012, 32 (03): 749-751.   DOI: 10.3724/SP.J.1087.2012.00749
Because the functional under consideration is non-convex, the calculation results of variational image segmentation models often fall into local minima. Building on active-contour global vector-valued image segmentation, vector-valued segmentation and image denoising were integrated into a new variational formulation within a global-minimum framework. The new model is easy to construct and requires less computation; compared with the classical level set method, it avoids tedious re-initialization of the level set function. Analyses on artificial and real images verify that the new method gives better segmentation results.